Search Results

Search found 22481 results on 900 pages for 'andy may'.


  • PHP / SSH2 Multi-threading

    - by Asad Moeen
    I'm basically done using SSH2 with PHP. Some may already know that while using it, the PHP code actually waits for all the listed commands to be executed over SSH, and only when everything is done does it give back the results. That is fine for most of the work I am doing, but I need some commands to be multi-threaded.

        $cmd = MyCommand;
        echo $ssh->exec($cmd);

    So I just want this to run in parallel 2 times. I googled some stuff but didn't get along with it. As a basic approach, I came across this way posted by someone, but it didn't work out for me:

        for ($i = 0; $i < 2; $i += 1) {
            exec("php test_1.php $i > test.txt &");
            // this will execute test_1.php and leave the process running in the
            // background, moving to the next iteration of the loop immediately
            // without waiting for the script in test_1.php to complete; $i is
            // passed as an argument
        }

    I tried to put it this way, exec("echo $ssh->exec($cmd) $i test.txt &"); in the loop, but either it never entered the loop or the echo $ssh->exec failed. I don't really need very neat multi-threading. Even a single second of delay between them would do, thank you.
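    In case it clarifies what I'm after, here is a minimal sketch of the work-around I keep reading about: instead of trying to fork inside one SSH session, spawn two background PHP processes, each of which opens its own SSH connection and runs a single command. The file worker.php and the command names are hypothetical placeholders, and the sketch assumes whatever SSH class already provides $ssh->exec() is set up inside that worker.

        <?php
        // runner.php -- sketch only; worker.php is a hypothetical script that
        // opens its own SSH connection and echoes the result of $ssh->exec($cmd).
        $commands = array('MyCommandA', 'MyCommandB');   // placeholder commands

        foreach ($commands as $i => $cmd) {
            // The trailing "&" plus output redirection keeps exec() from blocking,
            // so both workers end up running at the same time.
            $line = sprintf('php worker.php %s > out_%d.txt 2>&1 &',
                            escapeshellarg($cmd), $i);
            exec($line);
        }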

    Read the article

  • What is the 'best practice' for installing perl modules on Solaris/OpenSolaris?

    - by AndrewR
    I'm currently in the process of writing setup instructions for some software I've written that is implemented as a set of Perl modules. Having done this for various flavours of Linux, I'm now doing the same for Solaris/OpenSolaris (v10 only). Part of the setup process is to make sure that dependent Perl modules are installed. This has been pretty easy on Linux as the Perl modules I require tend to be within the distro's packaging system (eg yum install perl-Cache-Cache). This is not the case on Solaris, so I'm working on setup instructions that use the CPAN module to fetch dependent modules (eg perl -MCPAN -e 'install Cache::Cache'). This works OK, but there are known problems with modules that require things to be built with a C compiler. The problem is that the generated C Makefile assumes you're using Sun's compiler and uses command-line options not understood by gcc, which you may be using instead. Consulting teh Internetz has thrown up a number of solutions to this:

        - Install and use Sun's compiler
        - Use the perlgcc wrapper script
        - Edit the makefiles by hand (yuk)

    All of these work. My question to those more familiar with Solaris than me is: is one of these the 'best' or 'most commonly used' method?
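    For completeness, a hedged sketch of a fourth variant I've seen suggested: telling the CPAN shell itself to use gcc and GNU make, so the generated Makefiles stop assuming Sun's cc. The paths and flags below are assumptions and will differ between systems.

        $ perl -MCPAN -e shell
        cpan> o conf make /usr/sfw/bin/gmake          # assumed location of GNU make
        cpan> o conf makepl_arg "CC=gcc LD=gcc"       # pass gcc through to Makefile.PL
        cpan> o conf commit
        cpan> install Cache::Cache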

    Read the article

  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
    We have a number of 10.9-10.9.3 (Mavericks) machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 fileservers with deduplication enabled. I have found that files newly written or unoptimized can be accessed without issue - read, written, modified, etc. Once a file gets optimized/deduplicated and Windows adds the P and L attributes (sparse and symlink), the Macs running Mavericks begin to have access issues. Once the files get deduplicated, users begin receiving read access errors when copying files (see error 1 below). This happens when copying to folders within the current folder tree or when copying somewhere on the local system. If you stop the copy operation and retry a few more times, it may eventually work for the specific instance but fail again later. I am, however, able to copy these files without issue via the terminal. Other systems running 10.7 do not experience the same issues and are able to access file server resources without problems. Many of the systems having issues are newer and thus not able to be downgraded to 10.8 or 10.7. I have tried Finder replacements such as Path Finder, but the results are the same. I know this is at least similar to the issues many Mac users are already experiencing and posting about, but I haven't seen it directly linked to deduplication and the attributes written by Windows Server. Has anyone seen this issue? Have any solutions been found?

    Error 1, seen when copying files after the P and L attributes have been set by deduplication:

        "One or more items can't be copied to "Folder" because you don't have permissions to read them."

    Via the system.log, I am also seeing the following error when accessing these deduplicated file shares. The reparse point tag listed below is IO_REPARSE_TAG_DEDUP:

        smbfs_nget: filename.ext - unknown reparse point tag 0x80000013

    Read the article

  • Domain workstation acting up and I can't track it down.

    - by DevNULL
    I have a developer with a Windows XP (SP2) 64-bit machine. If the machine is left on overnight (or any period of time longer than 5-6 hours), it takes 2-3 minutes to open any local drive and his network drives are no longer accessible. Here's what the system logs report. Any help? BTW: the problem just started a week ago and nothing has changed on the domain controller / AD or his machine.

    --- ERROR 1

        Event Type: Error
        Event Source: NETLOGON
        Event Category: None
        Event ID: 5719
        Date: 6/8/2010
        Time: 9:17:26 AM
        User: N/A
        Computer: BFC1
        Description: This computer was not able to set up a secure session with a domain controller in domain UR due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.
        ADDITIONAL INFO: If this computer is a domain controller for the specified domain, it sets up the secure session to the primary domain controller emulator in the specified domain. Otherwise, this computer sets up the secure session to any domain controller in the specified domain. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: 5e 00 00 c0   ^..A

    --- ERROR 2

        The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {555F3418-D99E-4E51-800A-6E89CFD8B1D7} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19). This security permission can be modified using the Component Services administrative tool. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    --- ERROR 3

        Event Type: Error
        Event Source: RemoteAccess
        Event Category: None
        Event ID: 20106
        Date: 6/8/2010
        Time: 10:12:18 AM
        User: N/A
        Computer: BFC1
        Description: Unable to add the interface {E76F0A78-7A0B-4EBB-A081-BA3BD452FC4C} with the Router Manager for the IP protocol. The following error occurred: Cannot complete this function. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: eb 03 00 00   e...

    Read the article

  • Server not accepting uploads

    - by Tatu Ulmanen
    I'm having a strange problem with my VPS: I can download files from it, I can use PuTTY to connect to it, and all behaves normally. But sometimes, when I try to upload a file to the server or save a file via SFTP, the connection inexplicably fails. I am using jEdit to edit files remotely via SFTP. When it works, it works fine. When it doesn't, I get an error message:

        Cannot save: java.io.IOException: inputstream is closed
        Cannot save: java.io.IOException: 4:

    I can see that a temporary save file (#file.php#save#) is created on the server with a filesize of 0. So the connection works, but when it comes to sending the actual data, something fails. The same thing happens with WinSCP, but the error is different:

        Copying file fatally failed.
        Copying files to remote side failed.

    And I can always browse the server with PuTTY without a problem. I see nothing abnormal in any log files. auth.log shows this when I try to save:

        sshd[32638]: Accepted password for - from - port 62272 ssh2
        sshd[32638]: pam_unix(sshd:session): session opened for user - by (uid=0)
        sshd[32640]: subsystem request for sftp
        sshd[32638]: pam_unix(sshd:session): session closed for user -

    When I wait for a while (say, an hour), everything works fine again. It can't be a temporary ban, as I am still allowed to connect to the server, right? I know this may not be enough info to solve the problem, but I am grateful for any clues or bits of information that might help. What are the possible causes for this kind of behaviour? What log files can I check for clues? I'm running out of ideas!

    Read the article

  • Running multiple copies of openssh-server (sshd) on Ubuntu

    - by cecilkorik
    I may be attacking this problem the wrong way; if so, let me know. I have a server which is available through SSH from both the public internet and the local LAN. I would like to have two very different security policies for each, by running two copies of sshd with two different sshd_config files, each on a different port. Some of the things I'd like to change are allowing password or public-key authentication on the LAN, but public-key only from the internet. All (real) users could log in from the LAN side, but only certain authorized users would be individually whitelisted to log in through the internet. As far as I can tell this requires having two different SSH daemons running on different ports with different sshd_configs. I am fine with the different-ports part; I can easily forward port 22 to any port I want through my firewall. So my question is: what is the best way to actually START the second sshd under Ubuntu 10.04 LTS? Is there a recommended way to do something like this? Surely I am not the first person with this sort of need. I have a bit of experience with upstart, and I suppose I can manually hack the second sshd into /etc/init/ssh.conf, but I'm not sure if that will get overwritten by the package. However I do it, it's important to ensure both sshd processes always get restarted after any automatic or manual upgrade of the openssh-server package. Thanks in advance.
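    To make the question more concrete, here is a rough sketch of the kind of standalone upstart job I had in mind, rather than editing /etc/init/ssh.conf itself (the job name, the second config path and the policy split are all made up; the idea is that the package's own job stays untouched and keeps serving the LAN):

        # /etc/init/ssh-internet.conf  -- hypothetical second sshd instance
        description "OpenSSH server, internet-facing policy"

        start on filesystem or runlevel [2345]
        stop on runlevel [!2345]

        respawn
        respawn limit 10 5

        # -D keeps sshd in the foreground so upstart can supervise it;
        # -f points it at a second config that sets its own Port,
        # PasswordAuthentication no, AllowUsers, and so on.
        exec /usr/sbin/sshd -D -f /etc/ssh/sshd_config.internet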

    Read the article

  • Building PHP For MacOS

    - by Eray
    I was using XAMPP and decided to uninstall it and use MacOS's built-in Apache and PHP modules. But while uninstalling XAMPP I accidentally deleted /usr/bin/php and the other PHP CLI files. So I decided to install the newest version of PHP (5.5.12) instead of rebuilding the current version (5.4.24). I downloaded and unzipped it, then ran the following as mentioned in this guide:

        ./configure '--with-apxs2=/usr/sbin/apxs' '--enable-cli' '--with-config-file-path=/etc' '--with-zlib=/usr' '--enable-bcmath' '--with-bz2=/usr' '--enable-calendar' '--disable-cgi' '--with-curl=/usr' '--enable-dba' '--enable-ndbm=/usr' '--enable-exif' '--enable-fpm' '--enable-ftp' '--with-gd' '--enable-gd-native-ttf' '--enable-mbregex' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pear' '--with-pdo-mysql=mysqlnd' '--with-mysql-sock=/var/mysql/mysql.sock' '--with-tidy' '--enable-wddx' '--with-xmlrpc' '--enable-zip'
        make
        make install

    When I check phpinfo(), it's still version 5.4.24. This line is from my httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    /usr/libexec/apache2/libphp5.so is coming from the old version, and I couldn't find libphp5.so for the new version - there is no libphp5.so file inside the modules dir. How can I use the new PHP build with Apache?

    UPDATE: results of the php -v command:

        PHP 5.5.12 (cli) (built: May 27 2014 05:17:21)
        Copyright (c) 1997-2014 The PHP Group
        Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
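    A few hedged commands that might help narrow down where the freshly built module actually went; they assume the stock apxs/httpd paths, which is exactly what needs confirming:

        # Ask apxs where Apache modules get installed -- this is where
        # --with-apxs2=/usr/sbin/apxs should have put the new libphp5.so
        /usr/sbin/apxs -q LIBEXECDIR

        # Compare timestamps: a module built today should be dated today
        ls -l /usr/libexec/apache2/libphp5.so

        # Confirm which config file and PHP module the running Apache uses
        /usr/sbin/httpd -V | grep SERVER_CONFIG_FILE
        /usr/sbin/httpd -M | grep -i php

        # After fixing the LoadModule line, restart Apache
        sudo apachectl restart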

    Read the article

  • Metacity/Compiz not starting upon login, Ubuntu 10.10

    - by Ryan Lanciaux
    TLDR: As of this afternoon, I do not have a window manager when I log in to Ubuntu 10.10. I would like to have a window manager on login without needing to add it to the startup applications. I just started using Linux again as my home OS (I used it for a long time years ago but have been on Windows up until this past weekend), so this may be kind of n00b-ish :) Anyway, up until today, everything on my machine was running okay. I did not have Compiz running as the default WM because I'm running NVidia drivers and Xinerama (and as I understand it, Xinerama and Compiz don't work well together). I made no changes to my xorg config or anything in /etc, but today when I logged in, I had to manually start metacity from the command line to get any window manager. I'm really not sure what would be causing this or what I can do to get it working again. My xorg.conf is available here: https://gist.github.com/845618. My default window manager is set to /usr/bin/metacity in Configuration Editor under /desktop/gnome/applications/window_manager. p.s. Any tips on how to run 3 monitors where I can move windows between screens without Xinerama would be appreciated, but that's prolly for another thread :)
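    In case it helps anyone poking at the same thing: on GNOME 2.3x the session reads a different gconf key than the deprecated one above, so here is a sketch of what I plan to check next (the key name is taken from the stock gnome-session schema, so treat it as an assumption):

        # show which window manager gnome-session thinks is required
        gconftool-2 --get /desktop/gnome/session/required_components/windowmanager

        # point it back at metacity (or compiz) and log out/in again
        gconftool-2 --type string \
            --set /desktop/gnome/session/required_components/windowmanager metacity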

    Read the article

  • Why do hosts prefer Linux to Windows Server?

    - by iconiK
    So far I see a HUGE majority of hosts providing only Linux shared hosting, offering Windows only on VPS (or even only on dedicated servers). Why is that? While Windows is a lot more expensive than Linux (though it depends on a lot of factors, not just initial and support license cost), it also provides ASP.NET, IIS and, of course, Microsoft SQL Server. I know in the past it might have been because cPanel was Linux-only, but now they have a Windows version. So why is Linux still predominantly used for shared hosting? PHP works on both systems. IIS can be (and probably is) faster. MySQL runs on both systems as well. cPanel has a Windows version. Python, Perl and Ruby all run on Windows too. You even have MS SQL Server Express, which I find superior to MySQL in both speed and features. Access is there for low-usage requirements, as is SQLite (which is so great for quick small stuff). And with PowerShell you have a good alternative to the Unix shell. EDIT: I am looking for common reasons; I realize each hosting company (and/or its clients) may have different needs. This becomes very important when you get to VPS or Cloud, which give you a full operating system to use.

    Read the article

  • Unable to logon using terminal server connection

    - by satch
    I have several W2K3 SP2 servers with admin TS enabled. I discovered this morning that I was unable to log on to some of them. I've a couple of Citrix servers in different farms, a SAP (IA64) app server and a CVS server. All of them show the same symptoms: remote connections are refused. I've been able to log on locally, the Terminal Server service is up, and there are no users logged on (so connections are not depleted). There are no errors in the logs on most of the servers. One of the Citrix ones reported the following errors:

        Event ID 50, Source TermDD, Type Error
        Description: The RDP protocol component X.224 detected an error in the protocol stream and has disconnected the client.

        Event ID 1006, Source TermService, Type Error
        Description: The terminal server received large number of incomplete connections. The system may be under attack.

    Anyway, I suppose these errors appear because the server isn't working and Citrix users try to log on massively. (I nmap'ed the server and the port seems up.) I've solved this problem before by rebooting, but with so many servers affected that seems like a crappy workaround. Any idea about troubleshooting it properly? Thanks in advance.

    Read the article

  • Games, Windows 8.1 and 144Hz display

    - by Marioysikax
    So I have been having problems with a few games after switching from 7 to 8.1, which seem to be related to my 144Hz monitor. A few examples: Shank, Shank 2, Blood of the Werewolf, Astebreed, The Sims 2 and Rayman Legends patch 1.2. I had a few others as well, but it's been a while and I have a 600+ game Steam library. Of those games, at least The Sims 2 and Shank worked without any problems with the same setup on Windows 7. So basically these games simply refuse to launch with my basic setup. However, if I plug in a 60Hz TV over HDMI instead, everything magically starts to work. For Astebreed and The Sims 2, using windowed mode also seems to work. As for Rayman, the demo and version 1.0 work for some reason, and 1.2 breaks the settings menu. I have already tried contacting the supports. EA support stated the game simply shouldn't work with 8.1 at all (which is a lie, as it works with that TV and a friend on Windows 8 plays just fine), Ubisoft support took a few weeks and then said they would forward the info for further processing, and Blood of the Werewolf support had no idea what was going on and told me to just use my TV instead. Changing the monitor's refresh rate to 120Hz or forcing it to 60Hz doesn't do anything. I have DVI right now, but I will try DisplayPort when I get the cable. At PCGW, Garrett said it may have something to do with how resolution listing works in 8 compared to earlier Windows versions, but my googling skills don't bring anything up, and compatibility mode for earlier Windows versions doesn't work either (not that I expected it to). My system specs are on my Steam profile. How do I get these games, as well as possible future games with the same problem, to work with my 144Hz monitor? Downgrading to 7 would work but is far from practical, and I don't own a legit license for it.

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration, and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where our IT schedules a reboot during a 7-day production job. We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown), we could coordinate the two systems and help out. Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule. Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?

    Read the article

  • Authenticate domain-user credentials on unjoined virtual machine?

    - by bwerks
    Hi all, This question may sound silly, and perhaps a bit insane, but--is there any way to run a process on a machine not joined to a domain using credentials from a user in that domain? In my case, I'm running virtual machines installed with release binaries from our build process, as well as Visual Studio. Visual Studio is there to debug our release binaries, however it's being executed with vm-local user credentials. This means that it can't authenticate to our TFS deployment when executing "tf.exe view" to utilize our Source Server for debugging. Team Explorer manages to authenticate to TFS using a UI prompt, however I suspect that it's because we supply it with the TFS deployment's URI, and it's designed to display a prompt to facilitate workgroup scenarios; i.e. it's not like we're getting it for free. My instincts tell me the only way to authenticate on this vm is to join it or somehow form a one-way trust or something, but is there an easier way? For automation we're going to want to script this eventually, but I'm first surveying the feasibility of the thing.
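    One sketch of the "easier way" I'm hoping exists, for anyone else who lands here: runas with the /netonly switch presents domain credentials only for network connections while the process itself keeps running under the local VM account, so it should not require joining the machine. The domain, user name and paths below are placeholders, not our real setup.

        rem Start a command prompt whose *network* identity is the domain user,
        rem while it still runs locally under the VM account:
        runas /netonly /user:CONTOSO\builduser cmd.exe

        rem From that prompt, tf.exe (or Visual Studio launched from it) should be
        rem able to authenticate to TFS, e.g.:
        rem   tf.exe view $/TeamProject/SomeFile.cs /collection:http://tfs:8080/tfs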

    Read the article

  • Looking for Remote Control that works with everything (even Windows 7 Media Center)

    - by T Reddy
    Using my Google-Fu, it seems that the most basic of things one gets with any DVR is the remote control. Had I known it would be this difficult just to get a consumer IR receiver for Windows 7, I might not have bothered to build an HTPC. But too late, I already have the HTPC ready to go (minus the CETON card...). So I'm moving away from TiVo; I hate paying the monthly fees and my box is ancient. These are the things I want my HTPC setup to be able to do:

        - Switch audio from HDMI to SPDIF via the remote control (i.e., switch from TV to receiver); as a side note, the built-in audio on the mobo has software to do this.
        - Pressing the volume button on the remote should always change the TV's volume (or the receiver's if possible) and NOT the PC's volume.
        - The remote/receiver should work well at around 25 feet.
        - Bonus if the IR receiver can work with my existing TiVo remote (or other remotes laying around the house).

    I read a review of the Bluetooth TiVo remote... it sounds promising, but I'm not sure if it is great for a Windows 7 HTPC?

    Read the article

  • Good visuals supporting adopting Macintosh in a Windows company

    - by jdmuys
    I work in a Windows-only software service company, which just put up an internal contest for innovative ideas for the company. The idea I submitted is to let employees use a Mac instead of the mandatory PC if they wish to. My idea has been selected (among a few others) to reach the next stage of the contest. One of the items requested for the next stage is ONE visual that best illustrates the idea. While my pitch is rather good (I think), I have a hard time coming up with ONE visual that would be suggestive enough and not too fanboy-ish, or too restricted. That's why I am requesting suggestions. For reference, some of the points I intend to develop are (not in order):

        - de facto safety (little or no malware)
        - Apple as a company reached its leading position through innovation
        - (bio)diversity is a source of value for a service company and expands its reach
        - it makes financial sense
        - the Mac is the most compatible machine, making it a lot easier to test our software (especially web sites)
        - some OS X technologies can be valuable to a software service company (eg AppleScript)
        - some Apple tools can help us improve (eg Keynote)
        - it's good citizenship for our company, as Apple is now best in class according to Greenpeace

    I realize this question may be out of topic here; I'd be happy to have suggestions on where to post it. Please do not argue why OS X might be better or worse than Windows - my question is very narrow. Thanks.

    Read the article

  • v2v of RHEL5 box - issues with retaining MAC address

    - by Alex Berry
    For the last week we have been troubleshooting a customer's Red Hat virtual machine running on ESXi. We've been using Veeam to try to create a replica off-site, have been having trouble getting it to work on a decent schedule, and recently noticed issues with orphaned snapshots while looking at the datastore. You can see several snapshots in the same folder and it's causing issues with replication and backup, so we decided the cleanest way was to v2v the machine to another datastore so that we had a clean single-VMDK setup to work with. This is where our trouble started. We first started off with a v2v using VMware Converter connecting to the powered-on machine, as we were having issues doing an offline v2v. This copied fine, but when I tried to set a static MAC using this article http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=507 the new VM wouldn't take the address; it simply obtained a new MAC, received a DHCP lease, and then would only boot up to a blank red screen, never the login screen. So the next step was to do an offline v2v, once we finally got it working. Same thing: we followed the KB to the letter and still it wouldn't take the MAC. I then tried it again and upon completion I compared both old and new VMX files, copying every identifier and variable possible, then unregistered both VMs, uploaded the new VMX file and booted, only to see the same results. Finally I did the same as above but copied the disk using dd to a second attached VMDK and then attached that to the new VM, and still no luck. After downloading the modified VMX file after the first boot and comparing it to the original I created, I found that the BIOS UUID had changed from the one I typed in manually, so I'm assuming this may be the snagging point, but I have no idea. I've never had this issue before on a P2V and I'm just wondering if someone could shed some light on this. Maybe it's to do with RHEL licencing?

    Read the article

  • Windows 7 loses correct time zone upon reboot

    - by Android Eve
    I have a standard PC running Windows 7 Ultimate (64-bit). For some reason, it refuses to keep the correct time (the BIOS battery is OK) when restarted. Note (1): the time zone is correct. The "Internet Time" tab also shows "this computer is set to automatically synchronize with 'time.windows.com'". When I click the 'Change settings...' button, the 'Synchronize with an Internet time server' checkbox is checked. Still, upon reboot, the time is skewed by 6 hours... and doesn't correct itself even after waiting hours for this automatic synchronization to occur. Note (2): the BIOS time is set to local (i.e. not UTC). When I restart Windows 7 without booting into the other OS installed in the dual-boot config (Ubuntu Linux), it correctly remembers the time. This may explain the wrong time immediately after a reboot, but it doesn't explain why Windows 7 won't automatically 'Synchronize with an Internet time server' even after an hour. Why is this happening and how do I correct it?
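    If the dual boot does turn out to be the culprit, the usual fix is to make both operating systems agree on what the hardware clock stores; a hedged sketch of the two options (either one alone should do, and the Ubuntu-side change is the less invasive):

        Option 1 (Ubuntu side): tell Linux the hardware clock is local time, matching the BIOS setting.
            # in /etc/default/rcS, change
            UTC=yes
            # to
            UTC=no

        Option 2 (Windows side): tell Windows the hardware clock is UTC (run from an elevated prompt, then reboot):
            reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f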

    Read the article

  • EC2 instances keep becoming inaccessible via SSH, can I use elastic loadbalancer to check SSH connectivity?

    - by Rick
    This is mainly an issue for my development EC2 server, as the instance keeps becoming inaccessible via SSH. It happened yesterday, so I killed that one and started a new one, and it happened again later today. The server still works and my web application is accessible in a browser, but whenever I try to connect via SSH I get a "Permission denied (publickey)" error in my terminal. I am 100% sure I am doing nothing wrong, as I can create a new instance of the exact same AMI (it's a personal custom AMI), change absolutely nothing, including using the same .pem key, and then am able to SSH into that new instance using the exact same command as before (just changing the IP address). I understand that EC2 can have issues, but having this happen every day seems a bit odd. I am using an m2.xlarge instance, so I don't know if these tend to be unstable; in the past I have used a small instance and had it running for months with no problems, which is why I find this so odd. I am looking into using load balancing, but it seems the only "health" checks they offer are for HTTP or TCP, so I'm not sure if I can make it monitor SSH connectivity. This is important for development, as I may make 1-2 new pushes of an application a day and use SSH to do this. I have a designer that needs the app to always be accessible as he works with the front-end files to test output against the live application. Anyway, any advice / info is appreciated.
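    On the load balancer part specifically, a hedged sketch: the classic ELB health check accepts a plain TCP target, so pointing it at port 22 gives a crude "is sshd still answering" probe. The load balancer name is a placeholder, and this only detects the outage rather than explaining it:

        aws elb configure-health-check \
            --load-balancer-name dev-lb \
            --health-check Target=TCP:22,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3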

    Read the article

  • Internet Explorer will not open

    - by KCotreau
    I recently migrated a company to a Microsoft domain environment, logged the users in under their new domain accounts, and then copied the old profiles over the new profiles. I am not sure if that is related, since they did not complain about it right away and it may have been a subsequent patch or something, but I have two XP computers that will not open IE8. You click on it and nothing graphically happens at all, but you can see a process in Task Manager. If you click many times, you get multiple instances; often it will appear TWICE per click. It still works in the old profile, so it is specific to the profile, and I would like to fix it rather than blow it away. Here is what I have done without success:

        - Tried opening without add-ons (the shortcut in System Tools)
        - Reinstalled IE8
        - Ran SFC /SCANNOW
        - Found a script that was supposed to repair any related registry entries, and ran it
        - Exported the whole HKCU\Software\Microsoft\Internet Explorer key and deleted it, hoping that when I restarted IE it would be recreated... no joy, so I restored it.

    Any ideas?

    Read the article

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve up the wrong content. Any given piece of content suddenly shows something else: sometimes a JPEG loads a stylesheet instead, or a stylesheet loads as a page, or a page loads as an image. The images move around; sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a stylesheet, Apache shows that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content when using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%:

        ZPublisher.Conflict ConflictError at \path\to\object: database conflict error (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah, serial currently committed blah) (X conflicts (0 unresolved) since startup blah)

    I pulled that off the web, it's not from our logs, but it's the same message. This may be a red herring; it sounds like those messages are normal. We've updated to 3.3.5, same problems. I'm at a loss. I'm wondering, is there a good way to intercept what is being served? Secondly, is there a way to increase the verbosity of the access log to include the content type? I've even seen the problem manifest in the ZMI. It happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways on every reload. I believe we've seen this problem for a couple of years, but it was very intermittent - a page would show the content of a GIF, then after a reload it wouldn't happen again for a long time. Now it's a huge problem.

    Read the article

  • OS X superuser folders automatically created; per-user launchd process appears to kill 501

    - by Ric Pen
    New Apple laptop, OS X 10.8.2. I have used OS X before, but many years previously, and am not familiar with subtleties or changes in com.apple.launchd.peruser.x... I have previously (and in retrospect, foolishly) made changes to these rapidly spawned new per-user accounts (my initial reaction was that if ipfw was disabled, then I might well be under hacker attack, which I have dealt with years ago), but I believe I was wrong, and the results of my efforts at preserving the system's integrity have in fact been destructive, overreactive, and have resulted in much work to restore. My understanding from other posts is that superuser protocols have changed quite dramatically since I bought the first developer version of OS X many years ago. I haven't developed on Apple much since then, with the exception of WebObjects (IMO much underrated at that time, and more user friendly than ASP prior to .NET, I vaguely recall). Creation of the apparently nasty per-user folders appears to confound the 501 process, which logs an inability to find the firewall (ipfw). Can someone help me with this? I am concerned that either the system is improperly configured or an application was improperly installed (although there is little here beyond Apple's SDK, which I find quite accommodating and intuitive). Still, I am a novice, only sporadically develop at this time, and would really just like to see this system running happily. Please offer assistance, in the form of potential info sources, or if you have had a similar experience, then perhaps scripts to suss out this issue. I do not wish to damage the system, but Apple's Developer Connection and discussion threads do not appear to have dealt with this particular issue recently... although I may well have missed something you have not - please apprise. Any assistance on this issue is very much appreciated - by an old guy, who wants to do some things which were fun about 20 years ago.

    Read the article

  • Openbsd init script for ssh VPN tunnel

    - by manthis
    I have a server hosting SSH tunnels and OpenBSD 4.5 clients connecting to it. Things work just fine, but I need to automate the connection from the client to the server, so that if the client is accidentally rebooted, the connection is re-established unattended. It should be as straightforward as including the ssh connection in an init script. However, I have miserably failed to do so by including it in /etc/rc.local, which is the file I usually do this sort of thing in. Right now I am using autossh to also restart the connection if necessary, and the script that I put in /etc/rc.local follows:

        #!/bin/sh
        #
        # Example script to start up tunnel with autossh.
        #
        # This script will tunnel 2200 from the remote host
        # to 22 on the local host. On remote host do:
        # ssh -p 2200 localhost
        #
        # $Id: autossh.host,v 1.6 2004/01/24 05:53:09 harding Exp $
        #
        ID=root
        HOST=example.com
        #AUTOSSH_POLL=600
        #AUTOSSH_PORT=20000
        #AUTOSSH_GATETIME=30
        #AUTOSSH_LOGFILE=$HOST.log
        #AUTOSSH_DEBUG=yes
        #AUTOSSH_PATH=/usr/local/bin/ssh
        export AUTOSSH_POLL AUTOSSH_LOGFILE AUTOSSH_DEBUG AUTOSSH_PATH AUTOSSH_GATETIME AUTOSSH_PORT
        autossh -2 -f -M 20000 ${ID}@${HOST}

    The script detaches just fine when run manually, so I include it in /etc/rc.local as:

        echo -n 'starting local daemons:'
        if [ -x /usr/local/sbin/autossh.sh ]; then
            echo -n 'ssh tunnel'
            /usr/local/sbin/autossh.sh
        fi
        echo '.'

    I have also tried calling it from /etc/hostname.tun0, in case there are issues with /etc/rc.local not being called at the right time when network connections are ready, so I would use:

        inet 10.254.254.2 255.255.255.252 10.254.254.1
        !/usr/local/sbin/autossh.sh

    Your input is highly appreciated.
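    One thing I plan to rule out, sketched below: when /etc/rc.local runs at boot, root's environment is nearly empty, so ssh/autossh may be failing on a missing $HOME (no ~/.ssh/known_hosts or keys) rather than on timing. Setting HOME explicitly and capturing output at least makes the failure visible; the log path is arbitrary and the whole thing is an assumption, not a confirmed fix:

        echo -n 'starting local daemons:'
        if [ -x /usr/local/sbin/autossh.sh ]; then
            echo -n ' ssh tunnel'
            # assumption: give the script root's HOME so ssh can find ~/.ssh,
            # and keep any error output instead of losing it during boot
            HOME=/root /usr/local/sbin/autossh.sh >> /var/log/autossh-boot.log 2>&1
        fi
        echo '.'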

    Read the article

  • All applications quit when printing on iMac, OS X 10.5.8

    - by Tamany
    Hello, and hoping someone might be able to help. I recently ran Software Update (not sure if the problems are associated with this, but pretty sure they are, as I printed successfully before the update). I checked the log at the time of printing; any thoughts on how to fix this? THANK YOU!!

        03/05/2010 22:03:15 Microsoft Word[697] * -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x17a82b50
        03/05/2010 22:03:15 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:16 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        03/05/2010 22:03:17 [0x0-0x51051].com.microsoft.Word[697] Mon May 3 22:03:17 leopards-imac-2.local Word[697] : The function `CGPDFDocumentGetMediaBox' is obsolete and will be removed in an upcoming update. Unfortunately, this application, or a library it uses, is using this obsolete function, and is thereby contributing to an overall degradation of system performance. Please use `CGPDFPageGetBoxRect' instead.
        03/05/2010 22:22:09 Microsoft Word[697] * -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x1b036500

    Read the article

  • GA 8KNXP Rev1.0: 4GB installed, only 3.5 recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB memory modules into my GA-8KNXP system, which should sum to 4 GB. The specs from the manual say: maximum memory support 4 GB; if all six slots are utilized, slots 5 and 6 may only be equipped with single-sided RAM modules. And so I did. Anyway: the BIOS counts up to 3.5 GB and finishes there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory-mapping issue or a hardware issue. I've tried removing only one of the 512 MB modules, leaving 5 modules in place, but that just stopped the system from powering on correctly (the screen stays black although fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports only memory in slots 1 to 4 and ignores slots 5 and 6, so it only detects 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think technically the chipset should be able to offer and utilize the complete 4 GB memory range. Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?

    Read the article

  • SELinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following:

        Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname: Failed to get default SELinux security context for tiger

    I googled for this and ran across the following thread: http://www.spinics.net/lists/fedora-.../msg13049.html. Here's the part that I think relates to the problem I'm having:

        > What's wrong on my system? Why is it not possible to log in even if SELinux
        > is in permissive mode? Any suggestions?

        I'd start by trying to figure out why sshd isn't running in sshd_t (it seems
        to be running in sysadm_t). Paul.

        Yes, sshd is running in sysadm_t:

            ps axZ | grep sshd
            system_u:system_r:sysadm_t    3632 ?  Ss  0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi

            ls -Z /usr/sbin/sshd
            system_u:object_r:sshd_exec_t  /usr/sbin/sshd

        Don't know why it's not sshd_t. I didn't modify anything. It's a standard
        installation of SLES 11 with the default reference policy from Tresys. Maybe
        this code snippet from policy/modules/services/ssh.te is responsible for that:

            # Allow ssh logins as sysadm_r:sysadm_t
            gen_tunable(ssh_sysadm_login, true)

        Any ideas?

        Do you have the boolean init_upstart set to on? If not, try setting it to on.
        I do not believe the ssh_sysadm_login boolean works currently, but I may be
        mistaken.

        Yeah, setting init_upstart to on did the trick! THANKS A LOT!
        Do you know why this prevents the user from logging in through ssh even if
        SELinux is set to permissive?

    Ok, so the million dollar question is "where do I set 'init_upstart=1'"? It's not clear from context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.
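    In case it saves the next person the search: SELinux booleans aren't set in a config file but with setsebool, so the sketch below is what I ended up trying. It assumes the policy loaded on the machine actually defines an init_upstart boolean, which is distribution-dependent (the thread above was about SLES, not Ubuntu).

        # list the boolean and its current value, if the loaded policy defines it
        getsebool init_upstart

        # set it persistently (-P makes it survive reboots); needs root
        sudo setsebool -P init_upstart on

        # the other boolean mentioned in the thread is worth checking too
        getsebool ssh_sysadm_login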

    Read the article
